Recap Clip 8.9: Computational Learning Theory (Part 1)

Basically, what we have started doing now is what we think of as machine learning.

First we looked at some theory, and this is, in a way, the Turing machines of learning, right?

Asking all kinds of questions and getting negative answers.

Just like we have the halting problem in theoretical computer science, which basically says that not everything is computable, and the things that are computable are kind of boring.

Here we have a couple of theoretical results which essentially say that, under certain reasonable hypotheses, learning is impossible, because we would have to show the learning algorithm essentially all the examples there are.

The thing this is based on is the idea of learning being PAC: hypotheses being probably approximately correct.

Meaning, if you have a hypothesis that is way off, really bad, then the examples will expose it sooner or later.

A PAC learning algorithm is just any learning algorithm that has this kind of convergence towards the right hypotheses; that's all this is. Essentially, any algorithm that really does learn is a PAC learning algorithm, no matter how you do it.

Anything that computes is really a Turing machine.

That's the analogy here.

There is also some kind of stationarity assumption, roughly that the training examples and the examples we will see later come from the same distribution, without which learning makes no sense.
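The clip does not write this out, but the standard sample-complexity argument behind this "sooner or later" can be sketched as follows, assuming a finite hypothesis space H, a realisable target, and examples drawn independently from that fixed distribution: a hypothesis whose true error exceeds \(\epsilon\) survives m examples with probability at most \((1-\epsilon)^m\), and a union bound over H gives

\[
P(\text{some bad } h \text{ survives } m \text{ examples}) \;\le\; |H|\,(1-\epsilon)^m \;\le\; |H|\,e^{-\epsilon m} \;\le\; \delta
\quad\Rightarrow\quad
m \;\ge\; \frac{1}{\epsilon}\Big(\ln|H| + \ln\frac{1}{\delta}\Big).
\]

So the number of examples a PAC learner needs grows with the logarithm of the size of the hypothesis space, which is why that size is the quantity that matters below.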

Under these assumptions you can prove, and convince yourself, that PAC learning algorithms essentially need to see all the examples, unless you make certain assumptions, unless you know something. If you work with general hypothesis spaces, then PAC learning needs to see essentially all examples.
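To see why, here is a worked instance using the bound sketched above; the concrete count for the space of all Boolean functions on n attributes is my illustration and is not spelled out in the clip:

\[
|H| = 2^{2^n}, \qquad \ln|H| = 2^n \ln 2, \qquad
m \;\ge\; \frac{1}{\epsilon}\Big(2^n \ln 2 + \ln\frac{1}{\delta}\Big),
\]

which is exponential in n and already of the order of the \(2^n\) distinct examples that exist at all, so the learner effectively has to see every one of them.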

But if you have a good guess, saying: oh yes, everything needs to be not just any Boolean function but a Boolean polynomial with some size restriction, something or other. Or: it is not just any computable function, we only look at polynomials of even degree, and so on. Then of course the learnable hypothesis space is smaller than the space of all possible examples, and then this works.
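To make the contrast concrete, here is a minimal Python sketch that simply evaluates the bound above for an unrestricted and a restricted hypothesis space; the choice of n = 20 attributes, epsilon = delta = 0.05, and of pure conjunctions of literals as the restricted class is purely illustrative and not taken from the clip.

import math

def pac_sample_bound(log_h: float, epsilon: float, delta: float) -> int:
    # Smallest m with |H| * exp(-epsilon * m) <= delta, i.e.
    # m >= (ln|H| + ln(1/delta)) / epsilon  (finite H, realisable case).
    return math.ceil((log_h + math.log(1.0 / delta)) / epsilon)

n = 20                   # number of Boolean attributes (illustrative choice)
eps, delta = 0.05, 0.05  # accuracy and confidence parameters (illustrative)

# Unrestricted space: all Boolean functions on n attributes, |H| = 2^(2^n).
m_all = pac_sample_bound(log_h=(2 ** n) * math.log(2), epsilon=eps, delta=delta)

# Restricted space: pure conjunctions of literals, |H| = 3^n.
m_conj = pac_sample_bound(log_h=n * math.log(3), epsilon=eps, delta=delta)

print(m_all)   # about 1.5e7, beyond the 2^20 = 1,048,576 distinct examples that exist
print(m_conj)  # about 500, polynomial in n: the restriction makes learning feasible

The restricted class only helps, of course, if the target really lies in it, which is exactly the realisability issue discussed next.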

Of course there is a problem here, namely that you may lose realisability: you may just not have any correct hypothesis in that restricted space. But in a way, restricting the hypothesis space is a way of putting knowledge into the learning. You have external knowledge saying: oh, it must be one of those. Or: oh, I don't care if it's a perfect match; one from this set satisfies me.

So I'm saying here that there are three ways out of the dilemma, and they're actually all essentially the same. Going for learnable subsets of the hypothesis space is a way of putting in knowledge. It might not be a very structured or easy way, but you basically put in the knowledge that the target must be in there, or that if I find one in the subspace, then that's sufficiently good.

Part of chapter: Recaps

Accessible via: Open access

Duration: 00:05:01 min

Recording date: 2021-03-30

Uploaded on: 2021-03-31 11:17:51

Language: en-US

Recap: Computational Learning Theory (Part 1)

Main video on the topic: chapter 8, clip 9.
